The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
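To make the most commonly reported workaround concrete, the following is a minimal sketch of patch-based training: random sub-volumes are cropped from an image too large to process at once. The array sizes and function names are illustrative assumptions, not taken from any surveyed solution.

```python
# A minimal sketch of patch-based training, assuming a NumPy volume and
# illustrative patch sizes; no surveyed solution is reproduced here.
import numpy as np

def sample_patch(volume, patch_size=(64, 64, 64), rng=None):
    """Crop one random patch from a 3D volume that is too large to process at once."""
    if rng is None:
        rng = np.random.default_rng()
    starts = [rng.integers(0, dim - p + 1) for dim, p in zip(volume.shape, patch_size)]
    return volume[tuple(slice(s, s + p) for s, p in zip(starts, patch_size))]

volume = np.random.rand(512, 512, 256)                      # stand-in for a full-resolution scan
batch = np.stack([sample_patch(volume) for _ in range(4)])  # a batch small enough to train on
print(batch.shape)                                          # (4, 64, 64, 64)
```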
Recently, segmentation-based methods have become quite popular in scene text detection; they mainly comprise two steps: text kernel segmentation and expansion. However, the segmentation process considers each pixel only independently, and the expansion process struggles to achieve a favorable accuracy-speed trade-off. In this paper, we propose a Context-aware and Boundary-guided Network (CBN) to tackle these problems. In CBN, a basic text detector is first used to predict initial segmentation results. Then, we propose a context-aware module to enhance text kernel feature representations, which considers both global and local contexts. Finally, we introduce a boundary-guided module to expand enhanced text kernels adaptively using only the pixels on the contours, which not only yields accurate text boundaries but also keeps high speed, especially on high-resolution output maps. In particular, with a lightweight backbone, the basic detector equipped with our proposed CBN achieves state-of-the-art results on several popular benchmarks, and the proposed CBN can be plugged into several segmentation-based methods. Code will be available at https://github.com/XiiZhao/cbn.pytorch.
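As a rough illustration of the boundary-guided idea, the sketch below extracts only the contour pixels of a binary text-kernel mask and pushes them outward. In CBN the expansion offsets would come from a learned head, so the fixed outward shift here is purely a stand-in assumption.

```python
# A toy sketch of boundary-guided expansion: only contour pixels of a binary
# text-kernel mask are processed. The fixed outward shift is a stand-in for
# CBN's learned, adaptive offsets.
import numpy as np
import cv2

kernel_mask = np.zeros((100, 100), dtype=np.uint8)
kernel_mask[40:60, 30:70] = 1                           # a shrunk "text kernel" region

contours, _ = cv2.findContours(kernel_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contour_pts = contours[0].reshape(-1, 2).astype(float)  # boundary pixels only
print(f"pixels processed: {len(contour_pts)} of {kernel_mask.size}")

# Push each contour point away from the kernel centroid to approximate the
# full text boundary.
direction = contour_pts - contour_pts.mean(axis=0)
direction /= np.linalg.norm(direction, axis=1, keepdims=True) + 1e-6
expanded_boundary = contour_pts + 3.0 * direction
```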
Point cloud registration is a popular topic that has been widely used in 3D model reconstruction, localization, and retrieval. In this paper, we propose a new registration method, KSS-ICP, to address the rigid registration task in Kendall shape space (KSS) with Iterative Closest Point (ICP). The KSS is a quotient space that removes the influences of translation, scale, and rotation for shape-feature-based analysis. These influences can be summarized as the similarity transformations that do not change the shape feature, and the point cloud representation in KSS is invariant to them. We utilize this property to design KSS-ICP for point cloud registration. To overcome the difficulty of obtaining the KSS representation in the general case, the proposed KSS-ICP formulates a practical solution that does not require complex feature analysis, data training, or optimization. With a simple implementation, KSS-ICP achieves more accurate registration of point clouds. It is robust to similarity transformations, non-uniform density, noise, and defective parts. Experiments show that KSS-ICP performs better than the state of the art.
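A minimal sketch of the pre-shape normalization implied by Kendall shape space, followed by the rotation-alignment (Kabsch) step at the heart of each ICP iteration; this reflects our reading of the quotient-space construction, not the authors' released code.

```python
# Pre-shape normalization (remove translation and scale) plus one Kabsch
# alignment step; our reading of the construction, not the authors' code.
import numpy as np

def to_preshape(points):
    """Map an (N, 3) cloud to the pre-shape sphere: centered, unit Frobenius norm."""
    centered = points - points.mean(axis=0)
    return centered / np.linalg.norm(centered)

def best_rotation(src, dst):
    """Optimal rotation aligning src to dst (points as rows, equal pairing)."""
    u, _, vt = np.linalg.svd(src.T @ dst)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

rng = np.random.default_rng(0)
a = to_preshape(rng.random((500, 3)))
b = to_preshape(rng.random((500, 3)))
aligned = a @ best_rotation(a, b).T  # one step; full ICP re-pairs closest points and repeats
```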
Convolution-based methods provide good segmentation performance in medical image segmentation tasks. However, these methods face the following challenges when handling the edges of medical images: (1) previous convolution-based methods do not attend to the boundary relationship between foreground and background around the segmentation edge, which degrades segmentation performance when the edges vary; (2) the inductive bias of convolutional layers cannot adapt to complex edge variations and the aggregation of multiple segmented regions, so their performance improvements are mostly limited to the bodies of segmented regions rather than the edges. To address these challenges, we propose the CM-MLP framework, built on an MFI (multi-scale feature interaction) block and an ACRE (axial context relation encoder) block, to accurately segment the edges of medical images. In the MFI block, we propose a cascade multi-scale MLP (Cascade MLP) to process all local information from the deeper layers of the network simultaneously, and we utilize a cascade multi-scale mechanism to fuse the discrete local information gradually. The ACRE block is then used to make deep supervision focus on exploring the boundary relationship between foreground and background to refine the edges of medical images. The segmentation accuracy (Dice) of our proposed CM-MLP framework reaches 96.96%, 96.76%, and 82.54% on three benchmark datasets, the CVC-ClinicDB dataset, the sub-Kvasir dataset, and our in-house dataset, respectively, surpassing state-of-the-art methods. The source code and trained models will be available at https://github.com/programmerhyy/cm-mlp.
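The following PyTorch sketch shows one plausible reading of the cascade multi-scale MLP: features pooled at several scales pass through per-scale MLPs and are fused back progressively. Layer sizes, the pooling scheme, and the fusion rule are our assumptions rather than the published architecture.

```python
# One plausible reading of a cascade multi-scale MLP block; sizes and the
# fusion rule are assumptions, not the published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadeMSMLP(nn.Module):
    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(channels, channels), nn.GELU(),
                          nn.Linear(channels, channels))
            for _ in scales)

    def forward(self, x):                                  # x: (B, C, H, W)
        fused = torch.zeros_like(x)
        for scale, mlp in zip(self.scales, self.mlps):
            pooled = F.avg_pool2d(x, scale) if scale > 1 else x
            tokens = pooled.flatten(2).transpose(1, 2)     # (B, hw, C)
            out = mlp(tokens).transpose(1, 2).reshape_as(pooled)
            if scale > 1:                                  # back to full resolution
                out = F.interpolate(out, size=x.shape[2:])
            fused = fused + out                            # progressive fusion
        return fused

print(CascadeMSMLP(64)(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```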
The primary challenge in few-shot action recognition is the scarcity of training video data. To address this issue, current methods in this field mainly focus on designing algorithms at the feature level, while paying little attention to how the input video data are processed. Moreover, existing frame sampling strategies may omit critical action information in the temporal and spatial dimensions, which further reduces video utilization efficiency. In this paper, we propose a novel video frame sampler for few-shot action recognition to address this issue, in which task-specific spatial-temporal frame sampling is achieved via a temporal selector (TS) and a spatial amplifier (SA). Specifically, our sampler first scans the whole video at a small computational cost to obtain a global perception of the video frames. The TS is responsible for selecting the top frames that contribute most significantly. The SA emphasizes the discriminative information of each frame by amplifying critical regions under the guidance of saliency maps. We further adopt task-adaptive learning to dynamically adjust the sampling strategy according to the episodic task at hand. The implementations of both TS and SA are end-to-end optimizable, facilitating seamless integration of our proposed sampler into most few-shot action recognition methods. Extensive experiments demonstrate significant performance improvements on various benchmarks, including long-term videos.
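The sketch below captures the temporal selector at its simplest: score every frame cheaply, then keep the top-k in temporal order. The linear scorer is a stand-in assumption; the end-to-end optimization described above would need a differentiable relaxation of the hard top-k.

```python
# A simplified temporal selector: cheap per-frame scores, hard top-k. The
# paper's end-to-end variant would need a differentiable top-k relaxation.
import torch
import torch.nn as nn

class TemporalSelector(nn.Module):
    def __init__(self, feat_dim, k=8):
        super().__init__()
        self.k = k
        self.scorer = nn.Linear(feat_dim, 1)               # cheap per-frame saliency

    def forward(self, frame_feats):                        # (T, D) frame features
        scores = self.scorer(frame_feats).squeeze(-1)      # (T,)
        keep = torch.topk(scores, self.k).indices.sort().values  # restore temporal order
        return frame_feats[keep], keep

video = torch.randn(64, 256)                               # 64 scanned frames
kept, idx = TemporalSelector(256, k=8)(video)
print(kept.shape, idx.tolist())                            # torch.Size([8, 256]) + kept indices
```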
The dual-encoder structure successfully utilizes two language-specific encoders (LSEs) for code-switching speech recognition. Because the LSEs are initialized from two pre-trained language-specific models (LSMs), the dual-encoder structure can exploit abundant monolingual data and capture the attributes of the individual languages. However, existing methods place no language constraints on the LSEs and underuse the language knowledge contained in the LSMs. In this paper, we propose a language-specific characteristic assistance (LSCA) method to mitigate these problems. Specifically, during training we introduce two language-specific losses as language constraints and generate corresponding language-specific targets for them. During decoding, we take the decoding abilities of the LSMs into account by combining the output probabilities of the two LSMs and the mixture model to obtain the final predictions. Experiments show that either the training or the decoding method of LSCA improves the model's performance. Furthermore, combining the training and decoding methods of LSCA yields up to 15.4% relative error reduction on the code-switching test set. Moreover, using our method, the system can handle the code-switching speech recognition task well without extra shared parameters, or even retraining, based on two pre-trained LSMs.
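A hedged sketch of the decoding-side combination: the final token distribution interpolates the mixture model's output with those of the two language-specific models. The interpolation weights below are illustrative assumptions.

```python
# Interpolating three per-token probability vectors at decoding time; the
# weights are illustrative assumptions.
import torch

def combine_probs(p_mix, p_lang_a, p_lang_b, w_mix=0.6, w_lang=0.2):
    """Combine per-token probability vectors (each of shape (vocab,))."""
    combined = w_mix * p_mix + w_lang * p_lang_a + w_lang * p_lang_b
    return combined / combined.sum()                       # renormalize defensively

vocab = 1000
p_mix, p_a, p_b = (torch.softmax(torch.randn(vocab), dim=0) for _ in range(3))
print(combine_probs(p_mix, p_a, p_b).argmax())             # token chosen after combination
```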
Natural-language BERT is trained on text corpora in a self-supervised manner. Unlike natural-language BERT, vision-language BERT requires paired data for training, which restricts the scale of VL-BERT pre-training. We propose a self-training approach that allows training VL-BERT from unlabeled image data. The proposed method starts from our Unified Conditional Model, a vision-language BERT model that can perform zero-shot conditional generation. Given different conditions, the Unified Conditional Model can generate captions, dense captions, and even questions. We use labeled image data to train a teacher model and use the trained model to generate pseudo-captions on unlabeled image data. We then combine the labeled data and pseudo-labeled data to train a student model. The process is iterated by treating the student model as the new teacher. Using the proposed self-training approach with only 300k additional unlabeled data, we achieve competitive or better performance compared to similarly sized models trained with 3 million additional image data.
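Schematically, the self-training loop could look like the skeleton below, where train_model and the model's caption method are hypothetical placeholders standing in for whole training and generation subsystems.

```python
# A skeleton of the self-training loop; train_model and .caption() are
# hypothetical placeholders, not a released API.
def self_train(labeled, unlabeled, train_model, rounds=3):
    teacher = train_model(labeled)                         # teacher on labeled pairs
    for _ in range(rounds):
        pseudo = [(img, teacher.caption(img)) for img in unlabeled]
        teacher = train_model(labeled + pseudo)            # student becomes the new teacher
    return teacher
```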
Building general-purpose robots that can perform a massive range of tasks in human-level environments is notoriously complex. It requires robot learning to be sample-efficient, generalizable, compositional, and incremental. In this work, we introduce a systematic learning framework called the SAGCI-system that achieves the above four requirements. Our system first takes raw point clouds collected by a camera mounted on the robot's wrist as inputs and produces an initial model of the surrounding environment represented as a URDF. Our system adopts a learning-augmented differentiable simulation that loads the URDF. The robot then utilizes interactive perception to interact with the environment and modify the URDF. Leveraging the simulation, we propose a new model-based RL algorithm that combines object-centric and robot-centric approaches to efficiently produce policies that accomplish manipulation tasks. We apply our system to perform articulated object manipulation in both simulation and the real world. Extensive experiments demonstrate the effectiveness of our proposed learning framework. Supplementary materials and videos are available at https://sites.google.com/view/egci.
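A schematic outline of the loop described above; every callable is a hypothetical placeholder for an entire subsystem (URDF construction, interactive perception, differentiable simulation, and model-based RL).

```python
# Schematic SAGCI loop; all callables are hypothetical placeholders.
def sagci_episode(point_cloud, build_urdf, interact, refine, simulate, plan_policy):
    urdf = build_urdf(point_cloud)            # initial URDF model of the scene
    for _ in range(5):                        # interactive perception rounds
        observation = interact(urdf)          # robot probes the real environment
        urdf = refine(urdf, observation)      # modify the URDF from feedback
    return plan_policy(simulate(urdf))        # model-based RL in the loaded simulation
```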
Object visual navigation aims to steer an agent toward a target object based on the agent's visual observations. It is highly desirable to perceive the environment reasonably and control the agent accurately. In the navigation task, we introduce an Agent-Centric Relation Graph (ACRG) for learning visual representations based on the relationships in the environment. ACRG is an effective and reasonable structure that consists of two relationships: the relationship among objects and the relationship between the agent and the target. On the one hand, we design the Object Horizontal Relationship Graph (OHRG), which stores the relative horizontal positions among objects. Note that vertical relationships are not involved in OHRG; we argue that OHRG is suitable for the control strategy. On the other hand, we propose the Agent-Target Depth Relationship Graph (ATDRG), which enables the agent to perceive its distance to the target. To implement ATDRG, we utilize image depth to represent distances. Given the above relationships, the agent can perceive the environment and output navigation actions. Given the visual representation constructed from ACRG and position-encoded global features, the agent can capture the target position to perform navigation actions. Experimental results in the artificial environment AI2-THOR demonstrate that ACRG significantly outperforms other state-of-the-art methods in unseen testing environments.
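As one concrete interpretation of the OHRG, the sketch below stores the signed, normalized horizontal offset between bounding-box centers for every object pair; the box format and normalization are our assumptions.

```python
# One interpretation of the OHRG: pairwise horizontal offsets between
# bounding-box centers. Box format and normalization are assumptions.
import numpy as np

def build_ohrg(boxes, image_width):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; returns an (N, N) relation matrix."""
    cx = (boxes[:, 0] + boxes[:, 2]) / 2.0                 # horizontal centers only
    return (cx[None, :] - cx[:, None]) / image_width       # entry [i, j]: j relative to i

boxes = np.array([[10, 20, 50, 80], [200, 30, 260, 90], [120, 40, 160, 100]], dtype=float)
print(build_ohrg(boxes, image_width=300.0))
```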
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to obtain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
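One simple way a style code could "adjust the weights of the feed-forward layers" is FiLM-style modulation, sketched below; the actual StyleTalk mechanism may differ, and all dimensions here are illustrative.

```python
# FiLM-style modulation of a feed-forward layer by a style code; an
# illustrative mechanism, not StyleTalk's exact adaptation.
import torch
import torch.nn as nn

class StyleAwareFeedForward(nn.Module):
    def __init__(self, dim, style_dim, hidden=512):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
        self.to_scale_shift = nn.Linear(style_dim, 2 * dim)  # style code -> modulation

    def forward(self, x, style_code):          # x: (B, T, dim), style_code: (B, style_dim)
        scale, shift = self.to_scale_shift(style_code).chunk(2, dim=-1)
        return self.ff(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)

layer = StyleAwareFeedForward(dim=256, style_dim=128)
print(layer(torch.randn(2, 50, 256), torch.randn(2, 128)).shape)  # torch.Size([2, 50, 256])
```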